The source of this notebook for enhancing picture details is the Keras example Image Super-Resolution using an Efficient Sub-Pixel CNN.
We implemented this Efficient Sub-Pixel CNN to increase the level of detail in the 2D input images of our 2D-to-3D model.
Later, in another notebook accessible through the report links, we tried to fine-tune the result of this notebook as a .h5 pre-trained model, but we were unable to do so because of a "group convolution" error. More explanation regarding that is provided in the fine-tuning notebook.
# importing libraries
import tensorflow as tf
import os
import math
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import load_img
from tensorflow.keras.utils import array_to_img
from tensorflow.keras.utils import img_to_array
from tensorflow.keras.preprocessing import image_dataset_from_directory
from IPython.display import display
2023-08-04 21:26:18.241593: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
The dataset that has been used to train this model is: BSDS500 (Berkeley Segmentation Dataset 500).
This dataset is designed for evaluating natural edge detection that includes not only object contours but also object interior boundaries and background boundaries. It includes 500 natural images with carefully annotated boundaries collected from multiple users.
In order to download this dataset we needed to allow an unverified SSL context to work around a certificate error.
Also, the built-in keras.utils.get_file utility was used to retrieve the dataset.
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
dataset_url = "http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz"
data_dir = keras.utils.get_file(origin=dataset_url, fname="BSR", untar=True)
root_dir = os.path.join(data_dir, "BSDS500/data")
crop_size = 300
upscale_factor = 3
input_size = crop_size // upscale_factor
batch_size = 8
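As a sanity check on the hyperparameters above (our own illustration, not part of the original notebook): with crop_size = 300 and upscale_factor = 3, the model sees 100×100 low-resolution patches and reconstructs 300×300 targets.

```python
# Hypothetical sanity check for the crop/upscale arithmetic used above.
crop_size = 300
upscale_factor = 3
input_size = crop_size // upscale_factor  # size of the low-resolution model input

# The crop must divide evenly by the upscale factor, otherwise the
# low-resolution input times the factor would not match the target size.
assert crop_size % upscale_factor == 0
assert input_size * upscale_factor == crop_size
print(input_size)  # 100
```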
train_ds = image_dataset_from_directory(
    root_dir,
    batch_size=batch_size,
    image_size=(crop_size, crop_size),
    validation_split=0.2,
    subset="training",
    seed=1337,
    label_mode=None,
)

valid_ds = image_dataset_from_directory(
    root_dir,
    batch_size=batch_size,
    image_size=(crop_size, crop_size),
    validation_split=0.2,
    subset="validation",
    seed=1337,
    label_mode=None,
)
Found 500 files belonging to 1 classes. Using 400 files for training. Found 500 files belonging to 1 classes. Using 100 files for validation.
# scaling
def scaling(input_image):
    input_image = input_image / 255.0
    return input_image
# Scale from (0, 255) to (0, 1)
train_ds = train_ds.map(scaling)
valid_ds = valid_ds.map(scaling)
for batch in train_ds.take(1):
    for img in batch:
        display(array_to_img(img))
dataset = os.path.join(root_dir, "images")
test_path = os.path.join(dataset, "test")
test_img_paths = sorted(
    [
        os.path.join(test_path, fname)
        for fname in os.listdir(test_path)
        if fname.endswith(".jpg")
    ]
)
In this part we cropped and resized the images.
First, we converted the color space from RGB to YUV.
Directly from source:
For the input data (low-resolution images), we crop the image, retrieve the y channel (luminance), and resize it with the area method (use BICUBIC if you use PIL). We only consider the luminance channel in the YUV color space because humans are more sensitive to luminance change.
For the target data (high-resolution images), we just crop the image and retrieve the y channel.
# Use TF Ops to process:
def process_input(input, input_size, upscale_factor):
    input = tf.image.rgb_to_yuv(input)
    last_dimension_axis = len(input.shape) - 1
    y, u, v = tf.split(input, 3, axis=last_dimension_axis)
    return tf.image.resize(y, [input_size, input_size], method="area")


def process_target(input):
    input = tf.image.rgb_to_yuv(input)
    last_dimension_axis = len(input.shape) - 1
    y, u, v = tf.split(input, 3, axis=last_dimension_axis)
    return y
train_ds = train_ds.map(
    lambda x: (process_input(x, input_size, upscale_factor), process_target(x))
)
train_ds = train_ds.prefetch(buffer_size=32)

valid_ds = valid_ds.map(
    lambda x: (process_input(x, input_size, upscale_factor), process_target(x))
)
valid_ds = valid_ds.prefetch(buffer_size=32)
for batch in train_ds.take(1):
    for img in batch[0]:
        display(array_to_img(img))
    for img in batch[1]:
        display(array_to_img(img))
def get_model(upscale_factor=3, channels=1):
    conv_args = {
        "activation": "relu",
        "kernel_initializer": "Orthogonal",
        "padding": "same",
    }
    inputs = keras.Input(shape=(None, None, channels))
    x = layers.Conv2D(64, 5, **conv_args)(inputs)
    x = layers.Conv2D(64, 3, **conv_args)(x)
    x = layers.Conv2D(32, 3, **conv_args)(x)
    x = layers.Conv2D(channels * (upscale_factor ** 2), 3, **conv_args)(x)
    outputs = tf.nn.depth_to_space(x, upscale_factor)
    return keras.Model(inputs, outputs)
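The final Conv2D layer outputs channels * upscale_factor² feature maps, and tf.nn.depth_to_space rearranges them into a spatially upscaled single-channel image; this is the "sub-pixel" shuffle of ESPCN. A minimal NumPy sketch of that rearrangement (our own illustration, matching TensorFlow's default DCR ordering for NHWC tensors):

```python
import numpy as np

def depth_to_space(x, r):
    """Rearrange an (N, H, W, C*r*r) array into (N, H*r, W*r, C),
    mimicking tf.nn.depth_to_space with the default DCR ordering."""
    n, h, w, c = x.shape
    oc = c // (r * r)
    # Split the channel axis into (row-offset, col-offset, output-channel) ...
    x = x.reshape(n, h, w, r, r, oc)
    # ... interleave the offsets with the spatial axes ...
    x = x.transpose(0, 1, 3, 2, 4, 5)
    # ... and merge them into the upscaled spatial dimensions.
    return x.reshape(n, h * r, w * r, oc)

# A single pixel with 9 channels becomes a 3x3 single-channel patch.
x = np.arange(9).reshape(1, 1, 1, 9)
out = depth_to_space(x, 3)
print(out.shape)         # (1, 3, 3, 1)
print(out[0, :, :, 0])   # [[0 1 2] [3 4 5] [6 7 8]]
```

This is why the model needs no transposed convolutions: all upscaling happens in a single cheap reshuffle at the end.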
Directly from source:
We need to define several utility functions to monitor our results:
plot_results to plot and save an image. get_lowres_image to convert an image to its low-resolution version. upscale_image to turn a low-resolution image into a high-resolution version reconstructed by the model. In this function, we use the y channel from the YUV color space as input to the model and then combine the output with the other channels to obtain an RGB image.
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
import PIL
def plot_results(img, prefix, title):
    """Plot the result with zoom-in area."""
    img_array = img_to_array(img)
    img_array = img_array.astype("float32") / 255.0

    # Create a new figure with a default 111 subplot.
    fig, ax = plt.subplots()
    im = ax.imshow(img_array[::-1], origin="lower")

    plt.title(title)
    # zoom-factor: 2.0, location: upper-left
    axins = zoomed_inset_axes(ax, 2, loc=2)
    axins.imshow(img_array[::-1], origin="lower")

    # Specify the limits.
    x1, x2, y1, y2 = 200, 300, 100, 200
    # Apply the x-limits.
    axins.set_xlim(x1, x2)
    # Apply the y-limits.
    axins.set_ylim(y1, y2)

    plt.yticks(visible=False)
    plt.xticks(visible=False)

    # Make the line.
    mark_inset(ax, axins, loc1=1, loc2=3, fc="none", ec="blue")
    plt.savefig(str(prefix) + "-" + title + ".png")
    plt.show()
def get_lowres_image(img, upscale_factor):
    """Return low-resolution image to use as model input."""
    return img.resize(
        (img.size[0] // upscale_factor, img.size[1] // upscale_factor),
        PIL.Image.BICUBIC,
    )
def upscale_image(model, img):
    """Predict the result based on input image and restore the image as RGB."""
    ycbcr = img.convert("YCbCr")
    y, cb, cr = ycbcr.split()
    y = img_to_array(y)
    y = y.astype("float32") / 255.0

    input = np.expand_dims(y, axis=0)
    out = model.predict(input)

    out_img_y = out[0]
    out_img_y *= 255.0

    # Restore the image in RGB color space.
    out_img_y = out_img_y.clip(0, 255)
    out_img_y = out_img_y.reshape((np.shape(out_img_y)[0], np.shape(out_img_y)[1]))
    out_img_y = PIL.Image.fromarray(np.uint8(out_img_y), mode="L")
    out_img_cb = cb.resize(out_img_y.size, PIL.Image.BICUBIC)
    out_img_cr = cr.resize(out_img_y.size, PIL.Image.BICUBIC)
    out_img = PIL.Image.merge("YCbCr", (out_img_y, out_img_cb, out_img_cr)).convert(
        "RGB"
    )
    return out_img
Directly from source:
The ESPCNCallback object will compute and display the PSNR metric. This is the main metric we use to evaluate super-resolution performance.
class ESPCNCallback(keras.callbacks.Callback):
    def __init__(self):
        super().__init__()
        self.test_img = get_lowres_image(load_img(test_img_paths[0]), upscale_factor)

    # Store PSNR value in each epoch.
    def on_epoch_begin(self, epoch, logs=None):
        self.psnr = []

    def on_epoch_end(self, epoch, logs=None):
        print("Mean PSNR for epoch: %.2f" % (np.mean(self.psnr)))
        if epoch % 20 == 0:
            prediction = upscale_image(self.model, self.test_img)
            plot_results(prediction, "epoch-" + str(epoch), "prediction")

    def on_test_batch_end(self, batch, logs=None):
        self.psnr.append(10 * math.log10(1 / logs["loss"]))
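The PSNR the callback reports is derived directly from the validation MSE: for images scaled to [0, 1], PSNR = 10 · log10(1 / MSE). A quick check of that conversion (our own example values, not taken from the training run):

```python
import math

def psnr_from_mse(mse, max_val=1.0):
    """PSNR in dB for a given mean squared error; pixel values assumed in [0, max_val]."""
    return 10 * math.log10(max_val ** 2 / mse)

# An MSE of 0.0025 on [0, 1] images corresponds to roughly 26 dB,
# the same ballpark as the validation losses this model reaches.
print(round(psnr_from_mse(0.0025), 2))  # 26.02
```

Lower MSE means higher PSNR; each halving of the MSE adds about 3 dB.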
early_stopping_callback = keras.callbacks.EarlyStopping(monitor="loss", patience=10)
checkpoint_filepath = "/tmp/checkpoint"
model_checkpoint_callback = keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath,
    save_weights_only=True,
    monitor="loss",
    mode="min",
    save_best_only=True,
)
model = get_model(upscale_factor=upscale_factor, channels=1)
model.summary()
callbacks = [ESPCNCallback(), early_stopping_callback, model_checkpoint_callback]
loss_fn = keras.losses.MeanSquaredError()
optimizer = keras.optimizers.Adam(learning_rate=0.001)
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, None, None, 1)] 0
conv2d (Conv2D) (None, None, None, 64) 1664
conv2d_1 (Conv2D) (None, None, None, 64) 36928
conv2d_2 (Conv2D) (None, None, None, 32) 18464
conv2d_3 (Conv2D) (None, None, None, 9) 2601
tf.nn.depth_to_space (TFOp (None, None, None, 1) 0
Lambda)
=================================================================
Total params: 59657 (233.04 KB)
Trainable params: 59657 (233.04 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
epochs = 100
model.compile(
    optimizer=optimizer, loss=loss_fn,
)

model.fit(
    train_ds, epochs=epochs, callbacks=callbacks, validation_data=valid_ds, verbose=2
)
# The model weights (that are considered the best) are loaded into the model.
model.load_weights(checkpoint_filepath)
Epoch 1/100 Mean PSNR for epoch: 21.69 1/1 [==============================] - 0s 102ms/step
50/50 - 17s - loss: 0.0298 - val_loss: 0.0068 - 17s/epoch - 344ms/step Epoch 2/100 Mean PSNR for epoch: 24.64 50/50 - 17s - loss: 0.0051 - val_loss: 0.0033 - 17s/epoch - 348ms/step Epoch 3/100 Mean PSNR for epoch: 25.54 50/50 - 17s - loss: 0.0036 - val_loss: 0.0029 - 17s/epoch - 334ms/step Epoch 4/100 Mean PSNR for epoch: 26.23 50/50 - 17s - loss: 0.0031 - val_loss: 0.0027 - 17s/epoch - 332ms/step Epoch 5/100 Mean PSNR for epoch: 25.97 50/50 - 16s - loss: 0.0030 - val_loss: 0.0026 - 16s/epoch - 328ms/step Epoch 6/100 Mean PSNR for epoch: 26.20 50/50 - 17s - loss: 0.0029 - val_loss: 0.0025 - 17s/epoch - 332ms/step Epoch 7/100 Mean PSNR for epoch: 26.24 50/50 - 17s - loss: 0.0028 - val_loss: 0.0025 - 17s/epoch - 334ms/step Epoch 8/100 Mean PSNR for epoch: 26.18 50/50 - 16s - loss: 0.0028 - val_loss: 0.0025 - 16s/epoch - 329ms/step Epoch 9/100 Mean PSNR for epoch: 26.35 50/50 - 17s - loss: 0.0029 - val_loss: 0.0025 - 17s/epoch - 336ms/step Epoch 10/100 Mean PSNR for epoch: 26.20 50/50 - 17s - loss: 0.0028 - val_loss: 0.0024 - 17s/epoch - 345ms/step Epoch 11/100 Mean PSNR for epoch: 26.32 50/50 - 22s - loss: 0.0027 - val_loss: 0.0024 - 22s/epoch - 448ms/step Epoch 12/100 Mean PSNR for epoch: 26.05 50/50 - 18s - loss: 0.0027 - val_loss: 0.0024 - 18s/epoch - 358ms/step Epoch 13/100 Mean PSNR for epoch: 26.21 50/50 - 18s - loss: 0.0028 - val_loss: 0.0024 - 18s/epoch - 359ms/step Epoch 14/100 Mean PSNR for epoch: 26.40 50/50 - 18s - loss: 0.0027 - val_loss: 0.0024 - 18s/epoch - 356ms/step Epoch 15/100 Mean PSNR for epoch: 26.47 50/50 - 18s - loss: 0.0027 - val_loss: 0.0024 - 18s/epoch - 357ms/step Epoch 16/100 Mean PSNR for epoch: 26.41 50/50 - 18s - loss: 0.0027 - val_loss: 0.0024 - 18s/epoch - 352ms/step Epoch 17/100 Mean PSNR for epoch: 25.82 50/50 - 18s - loss: 0.0029 - val_loss: 0.0024 - 18s/epoch - 353ms/step Epoch 18/100 Mean PSNR for epoch: 26.41 50/50 - 18s - loss: 0.0027 - val_loss: 0.0024 - 18s/epoch - 351ms/step Epoch 19/100 Mean PSNR for epoch: 26.77 50/50 - 
18s - loss: 0.0027 - val_loss: 0.0024 - 18s/epoch - 352ms/step Epoch 20/100 Mean PSNR for epoch: 26.58 50/50 - 23s - loss: 0.0027 - val_loss: 0.0024 - 23s/epoch - 467ms/step Epoch 21/100 Mean PSNR for epoch: 26.52 1/1 [==============================] - 0s 57ms/step
50/50 - 22s - loss: 0.0026 - val_loss: 0.0024 - 22s/epoch - 434ms/step Epoch 22/100 Mean PSNR for epoch: 26.22 50/50 - 23s - loss: 0.0026 - val_loss: 0.0023 - 23s/epoch - 460ms/step Epoch 23/100 Mean PSNR for epoch: 26.60 50/50 - 21s - loss: 0.0027 - val_loss: 0.0023 - 21s/epoch - 422ms/step Epoch 24/100 Mean PSNR for epoch: 26.56 50/50 - 18s - loss: 0.0027 - val_loss: 0.0024 - 18s/epoch - 353ms/step Epoch 25/100 Mean PSNR for epoch: 27.01 50/50 - 18s - loss: 0.0026 - val_loss: 0.0023 - 18s/epoch - 353ms/step Epoch 26/100 Mean PSNR for epoch: 26.30 50/50 - 18s - loss: 0.0026 - val_loss: 0.0023 - 18s/epoch - 353ms/step Epoch 27/100 Mean PSNR for epoch: 27.18 50/50 - 18s - loss: 0.0026 - val_loss: 0.0023 - 18s/epoch - 353ms/step Epoch 28/100 Mean PSNR for epoch: 26.40 50/50 - 18s - loss: 0.0026 - val_loss: 0.0024 - 18s/epoch - 352ms/step Epoch 29/100 Mean PSNR for epoch: 26.63 50/50 - 18s - loss: 0.0026 - val_loss: 0.0023 - 18s/epoch - 352ms/step Epoch 30/100 Mean PSNR for epoch: 26.43 50/50 - 18s - loss: 0.0026 - val_loss: 0.0023 - 18s/epoch - 353ms/step Epoch 31/100 Mean PSNR for epoch: 26.01 50/50 - 17s - loss: 0.0028 - val_loss: 0.0024 - 17s/epoch - 346ms/step Epoch 32/100 Mean PSNR for epoch: 26.44 50/50 - 17s - loss: 0.0027 - val_loss: 0.0023 - 17s/epoch - 349ms/step Epoch 33/100 Mean PSNR for epoch: 26.89 50/50 - 17s - loss: 0.0027 - val_loss: 0.0023 - 17s/epoch - 349ms/step Epoch 34/100 Mean PSNR for epoch: 26.50 50/50 - 17s - loss: 0.0026 - val_loss: 0.0023 - 17s/epoch - 348ms/step Epoch 35/100 Mean PSNR for epoch: 26.69 50/50 - 18s - loss: 0.0026 - val_loss: 0.0023 - 18s/epoch - 351ms/step Epoch 36/100 Mean PSNR for epoch: 26.83 50/50 - 17s - loss: 0.0026 - val_loss: 0.0023 - 17s/epoch - 346ms/step Epoch 37/100 Mean PSNR for epoch: 26.58 50/50 - 17s - loss: 0.0026 - val_loss: 0.0023 - 17s/epoch - 346ms/step Epoch 38/100 Mean PSNR for epoch: 26.76 50/50 - 17s - loss: 0.0026 - val_loss: 0.0023 - 17s/epoch - 347ms/step Epoch 39/100 Mean PSNR for epoch: 26.65 
50/50 - 17s - loss: 0.0026 - val_loss: 0.0023 - 17s/epoch - 347ms/step Epoch 40/100 Mean PSNR for epoch: 26.40 50/50 - 18s - loss: 0.0026 - val_loss: 0.0023 - 18s/epoch - 357ms/step Epoch 41/100 Mean PSNR for epoch: 26.45 1/1 [==============================] - 0s 47ms/step
50/50 - 18s - loss: 0.0026 - val_loss: 0.0023 - 18s/epoch - 364ms/step Epoch 42/100 Mean PSNR for epoch: 26.68 50/50 - 19s - loss: 0.0026 - val_loss: 0.0023 - 19s/epoch - 381ms/step Epoch 43/100 Mean PSNR for epoch: 26.80 50/50 - 17s - loss: 0.0026 - val_loss: 0.0023 - 17s/epoch - 346ms/step Epoch 44/100 Mean PSNR for epoch: 26.84 50/50 - 17s - loss: 0.0026 - val_loss: 0.0023 - 17s/epoch - 349ms/step Epoch 45/100 Mean PSNR for epoch: 26.48 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 349ms/step Epoch 46/100 Mean PSNR for epoch: 26.26 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 349ms/step Epoch 47/100 Mean PSNR for epoch: 26.58 50/50 - 18s - loss: 0.0025 - val_loss: 0.0023 - 18s/epoch - 350ms/step Epoch 48/100 Mean PSNR for epoch: 26.32 50/50 - 17s - loss: 0.0027 - val_loss: 0.0023 - 17s/epoch - 346ms/step Epoch 49/100 Mean PSNR for epoch: 26.54 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 346ms/step Epoch 50/100 Mean PSNR for epoch: 26.42 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 347ms/step Epoch 51/100 Mean PSNR for epoch: 26.67 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 347ms/step Epoch 52/100 Mean PSNR for epoch: 26.45 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 348ms/step Epoch 53/100 Mean PSNR for epoch: 26.91 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 345ms/step Epoch 54/100 Mean PSNR for epoch: 26.56 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 346ms/step Epoch 55/100 Mean PSNR for epoch: 26.91 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 345ms/step Epoch 56/100 Mean PSNR for epoch: 26.81 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 347ms/step Epoch 57/100 Mean PSNR for epoch: 26.70 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 349ms/step Epoch 58/100 Mean PSNR for epoch: 26.45 50/50 - 18s - loss: 0.0025 - val_loss: 0.0023 - 18s/epoch - 354ms/step Epoch 59/100 Mean PSNR for epoch: 26.82 
50/50 - 17s - loss: 0.0026 - val_loss: 0.0023 - 17s/epoch - 350ms/step Epoch 60/100 Mean PSNR for epoch: 26.45 50/50 - 17s - loss: 0.0027 - val_loss: 0.0022 - 17s/epoch - 347ms/step Epoch 61/100 Mean PSNR for epoch: 26.75 1/1 [==============================] - 0s 46ms/step
50/50 - 18s - loss: 0.0025 - val_loss: 0.0022 - 18s/epoch - 368ms/step Epoch 62/100 Mean PSNR for epoch: 26.27 50/50 - 19s - loss: 0.0025 - val_loss: 0.0022 - 19s/epoch - 384ms/step Epoch 63/100 Mean PSNR for epoch: 26.37 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 347ms/step Epoch 64/100 Mean PSNR for epoch: 27.33 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 349ms/step Epoch 65/100 Mean PSNR for epoch: 26.89 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 344ms/step Epoch 66/100 Mean PSNR for epoch: 25.87 50/50 - 17s - loss: 0.0027 - val_loss: 0.0026 - 17s/epoch - 346ms/step Epoch 67/100 Mean PSNR for epoch: 26.67 50/50 - 18s - loss: 0.0026 - val_loss: 0.0022 - 18s/epoch - 356ms/step Epoch 68/100 Mean PSNR for epoch: 26.70 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 345ms/step Epoch 69/100 Mean PSNR for epoch: 26.46 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 348ms/step Epoch 70/100 Mean PSNR for epoch: 26.71 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 346ms/step Epoch 71/100 Mean PSNR for epoch: 26.62 50/50 - 17s - loss: 0.0025 - val_loss: 0.0023 - 17s/epoch - 343ms/step Epoch 72/100 Mean PSNR for epoch: 26.90 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 345ms/step Epoch 73/100 Mean PSNR for epoch: 26.73 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 348ms/step Epoch 74/100 Mean PSNR for epoch: 26.54 50/50 - 18s - loss: 0.0025 - val_loss: 0.0022 - 18s/epoch - 354ms/step Epoch 75/100 Mean PSNR for epoch: 26.88 50/50 - 18s - loss: 0.0025 - val_loss: 0.0022 - 18s/epoch - 351ms/step Epoch 76/100 Mean PSNR for epoch: 26.55 50/50 - 18s - loss: 0.0025 - val_loss: 0.0022 - 18s/epoch - 351ms/step Epoch 77/100 Mean PSNR for epoch: 24.51 50/50 - 17s - loss: 0.0028 - val_loss: 0.0036 - 17s/epoch - 344ms/step Epoch 78/100 Mean PSNR for epoch: 26.89 50/50 - 17s - loss: 0.0029 - val_loss: 0.0022 - 17s/epoch - 346ms/step Epoch 79/100 Mean PSNR for epoch: 26.93 
50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 347ms/step Epoch 80/100 Mean PSNR for epoch: 27.01 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 345ms/step Epoch 81/100 Mean PSNR for epoch: 26.90 1/1 [==============================] - 0s 47ms/step
50/50 - 18s - loss: 0.0025 - val_loss: 0.0022 - 18s/epoch - 367ms/step Epoch 82/100 Mean PSNR for epoch: 26.66 50/50 - 19s - loss: 0.0025 - val_loss: 0.0022 - 19s/epoch - 382ms/step Epoch 83/100 Mean PSNR for epoch: 26.85 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 342ms/step Epoch 84/100 Mean PSNR for epoch: 26.70 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 345ms/step Epoch 85/100 Mean PSNR for epoch: 26.85 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 344ms/step Epoch 86/100 Mean PSNR for epoch: 26.18 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 345ms/step Epoch 87/100 Mean PSNR for epoch: 26.51 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 345ms/step Epoch 88/100 Mean PSNR for epoch: 26.40 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 348ms/step Epoch 89/100 Mean PSNR for epoch: 26.58 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 341ms/step Epoch 90/100 Mean PSNR for epoch: 26.39 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 348ms/step Epoch 91/100 Mean PSNR for epoch: 26.36 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 347ms/step Epoch 92/100 Mean PSNR for epoch: 26.84 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 350ms/step Epoch 93/100 Mean PSNR for epoch: 26.81 50/50 - 18s - loss: 0.0025 - val_loss: 0.0023 - 18s/epoch - 351ms/step Epoch 94/100 Mean PSNR for epoch: 26.65 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 344ms/step Epoch 95/100 Mean PSNR for epoch: 26.09 50/50 - 17s - loss: 0.0025 - val_loss: 0.0024 - 17s/epoch - 343ms/step Epoch 96/100 Mean PSNR for epoch: 26.47 50/50 - 17s - loss: 0.0027 - val_loss: 0.0022 - 17s/epoch - 345ms/step Epoch 97/100 Mean PSNR for epoch: 26.39 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 346ms/step Epoch 98/100 Mean PSNR for epoch: 26.33 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 343ms/step Epoch 99/100 Mean PSNR for epoch: 26.58 
50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 344ms/step Epoch 100/100 Mean PSNR for epoch: 27.09 50/50 - 17s - loss: 0.0025 - val_loss: 0.0022 - 17s/epoch - 345ms/step
<tensorflow.python.checkpoint.checkpoint.CheckpointLoadStatus at 0x16a616e90>
total_bicubic_psnr = 0.0
total_test_psnr = 0.0
for index, test_img_path in enumerate(test_img_paths[50:60]):
    img = load_img(test_img_path)
    lowres_input = get_lowres_image(img, upscale_factor)
    w = lowres_input.size[0] * upscale_factor
    h = lowres_input.size[1] * upscale_factor
    highres_img = img.resize((w, h))
    prediction = upscale_image(model, lowres_input)
    lowres_img = lowres_input.resize((w, h))
    lowres_img_arr = img_to_array(lowres_img)
    highres_img_arr = img_to_array(highres_img)
    predict_img_arr = img_to_array(prediction)
    bicubic_psnr = tf.image.psnr(lowres_img_arr, highres_img_arr, max_val=255)
    test_psnr = tf.image.psnr(predict_img_arr, highres_img_arr, max_val=255)
    total_bicubic_psnr += bicubic_psnr
    total_test_psnr += test_psnr

    print(
        "PSNR of low resolution image and high resolution image is %.4f" % bicubic_psnr
    )
    print("PSNR of predict and high resolution is %.4f" % test_psnr)
    plot_results(lowres_img, index, "lowres")
    plot_results(highres_img, index, "highres")
    plot_results(prediction, index, "prediction")
print("Avg. PSNR of lowres images is %.4f" % (total_bicubic_psnr / 10))
print("Avg. PSNR of reconstructions is %.4f" % (total_test_psnr / 10))
1/1 [==============================] - 0s 52ms/step PSNR of low resolution image and high resolution image is 30.0157 PSNR of predict and high resolution is 30.8130
1/1 [==============================] - 0s 42ms/step PSNR of low resolution image and high resolution image is 25.1103 PSNR of predict and high resolution is 26.1826
1/1 [==============================] - 0s 92ms/step PSNR of low resolution image and high resolution image is 27.7789 PSNR of predict and high resolution is 28.5335
1/1 [==============================] - 0s 50ms/step PSNR of low resolution image and high resolution image is 28.0321 PSNR of predict and high resolution is 28.3598
1/1 [==============================] - 0s 40ms/step PSNR of low resolution image and high resolution image is 25.7853 PSNR of predict and high resolution is 26.4712
1/1 [==============================] - 0s 49ms/step PSNR of low resolution image and high resolution image is 25.9181 PSNR of predict and high resolution is 26.8109
1/1 [==============================] - 0s 49ms/step PSNR of low resolution image and high resolution image is 26.2389 PSNR of predict and high resolution is 27.2527
1/1 [==============================] - 0s 93ms/step PSNR of low resolution image and high resolution image is 23.3281 PSNR of predict and high resolution is 24.7216
1/1 [==============================] - 0s 43ms/step PSNR of low resolution image and high resolution image is 29.9008 PSNR of predict and high resolution is 30.1995
1/1 [==============================] - 0s 61ms/step PSNR of low resolution image and high resolution image is 25.2492 PSNR of predict and high resolution is 25.8393
Avg. PSNR of lowres images is 26.7357 Avg. PSNR of reconstructions is 27.5184
model.save("../Image_Enhancement_before_finetuning.h5")
/Users/kavian/Desktop/venv/venv/tensorflow_cpu/lib/python3.11/site-packages/keras/src/engine/training.py:3000: UserWarning: You are saving your model as an HDF5 file via `model.save()`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')`.
saving_api.save_model(